RAG Document Search

Overview

By leveraging a RAG pipeline to help users query a given knowledge base corpus, the Assistant can provide a more reliable and accurate knowledge base search experience. This improves the overall user experience and helps ensure that users receive relevant, up-to-date information, since each answer is returned with links to its source documents.

A RAG pipeline for Document Search typically consists of a Data Repository, a Vector Database, and a Large Language Model. The pipeline can be implemented following one of three patterns.
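
The flow is the same in every pattern: retrieve passages that match the user's query, insert them into a prompt template, and have the LLM generate a grounded answer together with source links. The sketch below illustrates this flow in plain Python; the toy retriever, the sample documents, and `generate_with_llm()` are hypothetical stand-ins for the Vector Database and LLM integrations described in the patterns that follow.

```python
# Minimal sketch of the retrieve-then-generate flow. The retriever and the
# generate_with_llm() stub are placeholders: in the actual patterns the
# Vector Database is Watson Discovery or Elasticsearch and the LLM runs on
# watsonx.ai.
from typing import List

# Toy corpus standing in for the Data Repository.
DOCUMENTS = [
    {"id": "doc-1", "url": "https://example.com/doc-1",
     "text": "Reset your password from the account settings page."},
    {"id": "doc-2", "url": "https://example.com/doc-2",
     "text": "Contact support to update billing details."},
]

def retrieve(query: str, k: int = 3) -> List[dict]:
    """Toy keyword-overlap retriever; a real pipeline queries the vector database."""
    scored = []
    for doc in DOCUMENTS:
        overlap = len(set(query.lower().split()) & set(doc["text"].lower().split()))
        scored.append((overlap, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:k]]

def build_prompt(query: str, passages: List[dict]) -> str:
    """Insert the retrieved passages into a grounded prompt template."""
    context = "\n\n".join(p["text"] for p in passages)
    return (
        "Answer the question using only the passages below.\n\n"
        f"Passages:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

def generate_with_llm(prompt: str) -> str:
    """Stub for the LLM call; replace with the watsonx.ai text-generation API."""
    return "stubbed answer"

def answer(query: str) -> dict:
    passages = retrieve(query)
    completion = generate_with_llm(build_prompt(query, passages))
    # Return the answer alongside the source links used to ground it.
    return {"answer": completion, "sources": [p["url"] for p in passages]}

if __name__ == "__main__":
    print(answer("How do I reset my password?"))
```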

Pattern 1: Watson Discovery

This pattern consists of creating two integrations: Watson Discovery and Watsonx.ai. Watson Discovery is used to store and carry out searches on data collections; its native search capability retrieves relevant passages, which are then passed into an LLM prompt template in Watsonx.ai to generate an answer to the user's query (see the sketch after the diagram below).

Required Integrations:

  • Watson Discovery
  • Watsonx.ai

Figure: RAG Pattern 1
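
The sketch below outlines Pattern 1 using the ibm-watson and ibm-watsonx-ai Python SDKs. It is a minimal example under stated assumptions, not a definitive implementation: the service URL, project IDs, API keys, model ID, and the response fields read from the Discovery query result are placeholders that you should adjust to your own deployment and SDK version.

```python
# Pattern 1 sketch: Watson Discovery retrieves passages, watsonx.ai generates
# the answer. Credentials, project IDs, the model ID, and the response field
# names below are assumptions/placeholders.
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson import DiscoveryV2
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

# 1. Retrieve candidate passages from Watson Discovery.
discovery = DiscoveryV2(
    version="2023-03-31",
    authenticator=IAMAuthenticator("DISCOVERY_API_KEY"),
)
discovery.set_service_url("https://api.us-south.discovery.watson.cloud.ibm.com")

query = "How do I reset my password?"
search = discovery.query(
    project_id="DISCOVERY_PROJECT_ID",
    natural_language_query=query,
    count=3,
).get_result()

# Each result may carry document_passages when passage retrieval is enabled;
# collect the passage text defensively in case the fields are absent.
passages = [
    p["passage_text"]
    for result in search.get("results", [])
    for p in result.get("document_passages", [])
]

# 2. Ground the LLM prompt template in the retrieved passages.
prompt = (
    "Answer the question using only the context below.\n\n"
    "Context:\n" + "\n\n".join(passages) +
    f"\n\nQuestion: {query}\nAnswer:"
)

# 3. Generate the answer with a watsonx.ai foundation model.
model = ModelInference(
    model_id="ibm/granite-13b-instruct-v2",  # assumed model id; use any model available in your project
    credentials=Credentials(
        api_key="WATSONX_API_KEY",
        url="https://us-south.ml.cloud.ibm.com",
    ),
    project_id="WATSONX_PROJECT_ID",
)
print(model.generate_text(prompt=prompt))
```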

Pattern 2: Watsonx Discovery with Elasticsearch

Figure: RAG Pattern 2